
    Benchmarking the BFGS Algorithm on the BBOB-2009 Function Testbed

    The BFGS quasi-Newton method is benchmarked on the noiseless BBOB-2009 testbed. A multistart strategy is applied with a maximum number of function evaluations of 10^5 times the search space dimension, resulting in the algorithm solving six functions.
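The multistart protocol described above can be sketched with SciPy's BFGS implementation. The restart distribution, budget accounting, and target value below are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_bfgs(f, dim, budget_factor=1e5, ftarget=1e-8, seed=0):
    """Restart BFGS from random points until the evaluation budget
    (budget_factor * dim) is exhausted or ftarget is reached."""
    rng = np.random.default_rng(seed)
    budget = int(budget_factor * dim)
    evals = 0
    best = np.inf
    while evals < budget and best > ftarget:
        x0 = rng.uniform(-5, 5, dim)  # BBOB search domain is [-5, 5]^dim
        res = minimize(f, x0, method="BFGS")
        evals += res.nfev             # count all function evaluations used
        best = min(best, res.fun)
    return best, evals

# usage on the (separable, well-conditioned) sphere function in 5-D
best, used = multistart_bfgs(lambda x: float(np.sum(x**2)), dim=5)
```

Note that `res.nfev` also counts the evaluations spent on finite-difference gradient estimates, which is consistent with budgets stated in function evaluations.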

    Benchmarking the NEWUOA on the BBOB-2009 Noisy Testbed

    NEWUOA, which belongs to the class of derivative-free optimization algorithms, is benchmarked on the BBOB-2009 noisy testbed. A multistart strategy is applied with a maximum number of function evaluations of 10^4 times the search space dimension.

    Benchmarking sep-CMA-ES on the BBOB-2009 Noisy Testbed

    A CMA-ES variant with partly linear time and space complexity is benchmarked on the BBOB-2009 noisy function testbed. Combined with a multistart strategy with increasing population size, the algorithm solves 10 of the 30 functions in 20-D.

    Benchmarking the BFGS Algorithm on the BBOB-2009 Noisy Testbed

    The BFGS quasi-Newton method is benchmarked on the noisy BBOB-2009 testbed. A multistart strategy is applied with a maximum number of function evaluations of about 10^4 times the search space dimension.

    Benchmarking the NEWUOA on the BBOB-2009 Function Testbed

    NEWUOA, which belongs to the class of derivative-free optimization algorithms, is benchmarked on the BBOB-2009 noise-free testbed. A multistart strategy is applied with a maximum number of function evaluations of up to 10^5 times the search space dimension, resulting in the algorithm solving 11 functions in 20-D. Results for the algorithm using the recommended number of interpolation points for the underlying model, and using the full model, are shown and discussed.

    Benchmarking sep-CMA-ES on the BBOB-2009 Function Testbed

    A CMA-ES variant with partly linear time and space complexity is benchmarked on the BBOB-2009 noiseless function testbed. Combined with a multistart strategy with increasing population size, the algorithm solves 17 of the 24 functions in 20-D.

    COCO: A Platform for Comparing Continuous Optimizers in a Black-Box Setting

    We introduce COCO, an open-source platform for Comparing Continuous Optimizers in a black-box setting. COCO aims to automate the tedious and repetitive task of benchmarking numerical optimization algorithms to the greatest possible extent. The platform and the underlying methodology allow deterministic and stochastic solvers for both single- and multi-objective optimization to be benchmarked in the same framework. We present the rationale behind the (decade-long) development of the platform as a general proposition for guidelines towards better benchmarking. We detail fundamental concepts of COCO such as the definition of a problem as a function instance, the underlying idea of instances, the use of target values, and runtime, defined by the number of function calls, as the central performance measure. Finally, we give a quick overview of the basic code structure and the currently available test suites. (Optimization Methods and Software, Taylor & Francis, in press, pp. 1-3.)
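The central performance measure mentioned above, runtime defined as the number of function calls until a target value is reached, can be illustrated with a small counting wrapper. This is a concept sketch only, not the actual COCO/cocoex API:

```python
import random

class CountingProblem:
    """Wrap an objective so that runtime can be measured in function calls,
    recording the evaluation count at which a target value is first hit."""
    def __init__(self, fun, target):
        self.fun, self.target = fun, target
        self.evaluations = 0
        self.runtime_to_target = None  # evals when target was first reached
    def __call__(self, x):
        self.evaluations += 1
        y = self.fun(x)
        if self.runtime_to_target is None and y <= self.target:
            self.runtime_to_target = self.evaluations
        return y

# e.g. pure random search on a 1-D sphere instance, target value 1e-2
random.seed(1)
prob = CountingProblem(lambda x: x * x, target=1e-2)
while prob.runtime_to_target is None and prob.evaluations < 10_000:
    prob(random.uniform(-5, 5))
```

Measuring runtime in function calls rather than wall-clock time makes results comparable across machines and implementations, which is the rationale the abstract alludes to.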

    PSO Facing Non-Separable and Ill-Conditioned Problems

    This report investigates the behavior of particle swarm optimization (PSO) on ill-conditioned functions. We find that PSO performs very well on separable, ill-conditioned functions. If the function is rotated such that it becomes non-separable, the performance declines dramatically. On non-separable, ill-conditioned functions we find the search costs (number of function evaluations) of PSO increasing roughly proportionally with the condition number. We never observe premature convergence, but on non-separable, ill-conditioned problems PSO is outperformed by a contemporary evolution strategy by orders of magnitude. The strong dependency of PSO on rotations originates from random events that are only independent within the given coordinate system. We argue that invariance properties, like rotational invariance, are desirable, because they increase the predictive power of performance results.
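The experimental construction described above, rotating a separable ill-conditioned function so that it becomes non-separable while leaving its level sets otherwise unchanged, can be sketched as follows. The function and scaling are common BBOB-style choices, assumed here for illustration:

```python
import numpy as np

def ellipsoid(x, cond=1e6):
    """Separable ill-conditioned ellipsoid: axis i is scaled by cond**(i/(d-1)),
    so the condition number of the Hessian is cond."""
    d = len(x)
    weights = cond ** (np.arange(d) / (d - 1))
    return float(np.sum(weights * np.asarray(x) ** 2))

def rotated(f, dim, seed=0):
    """Compose f with a random rotation R: x -> f(Rx). The level sets are the
    same up to rotation, so a rotation-invariant optimizer behaves identically
    on both versions, while a coordinate-wise method may not."""
    rng = np.random.default_rng(seed)
    # random orthogonal matrix via QR decomposition of a Gaussian matrix
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    return lambda x: f(q @ np.asarray(x))

f_sep = ellipsoid              # separable: PSO reportedly performs well here
f_rot = rotated(ellipsoid, 10) # non-separable: performance reportedly declines
```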

    A Simple Modification in CMA-ES Achieving Linear Time and Space Complexity

    This report proposes a simple modification of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) for high-dimensional objective functions that reduces the internal time and space complexity from quadratic to linear. The covariance matrix is constrained to be diagonal, and the resulting algorithm, sep-CMA-ES, samples each coordinate independently. Because the model complexity is reduced, the learning rate for the covariance matrix can be increased. Consequently, on essentially separable functions, sep-CMA-ES significantly outperforms CMA-ES. For dimensions larger than 100, sep-CMA-ES needs fewer function evaluations than CMA-ES even on the non-separable Rosenbrock function.
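The complexity reduction described above shows up already in the sampling step. A minimal sketch, with placeholder (identity) covariance values and with the adaptation of the diagonal entries omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000
mean = np.zeros(d)
sigma = 1.0

# full CMA-ES: sampling multiplies by a d x d covariance factor,
# giving O(d^2) time per sample and O(d^2) storage
A = np.eye(d)                 # stands in for a factor of the covariance matrix
x_full = mean + sigma * A @ rng.standard_normal(d)

# sep-CMA-ES: the covariance matrix is constrained to its diagonal,
# so each coordinate is sampled independently in O(d) time and space
diag_std = np.ones(d)         # per-coordinate standard deviations
x_sep = mean + sigma * diag_std * rng.standard_normal(d)
```

With only d diagonal entries to learn instead of d(d+1)/2 matrix entries, a larger learning rate becomes feasible, which is the source of the speed-up on separable functions.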

    Benchmarking a Weighted Negative Covariance Matrix Update on the BBOB-2010 Noisy Testbed

    In a companion paper, we presented a weighted negative update of the covariance matrix in the CMA-ES (weighted active CMA-ES or, in short, aCMA-ES). In this paper, we benchmark the IPOP-aCMA-ES on the BBOB-2010 noisy testbed in search space dimensions between 2 and 40 and compare its performance with the IPOP-CMA-ES. The aCMA suffers a moderate performance loss, of less than a factor of two, on the sphere function with two different noise models. On the other hand, the aCMA enjoys a (significant) performance gain, up to a factor of four, on 13 unimodal functions in various dimensions, in particular the larger ones. Compared to the best performance observed during BBOB-2009, the IPOP-aCMA-ES sets a new record on ten functions overall. The global picture is in favor of aCMA, which might establish a new standard also for noisy problems.
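The core idea of a weighted negative ("active") covariance update is that the worst offspring steps enter the rank-mu update with negative weights, actively shrinking variance in unsuccessful directions. A sketch of only that update, with illustrative constants rather than the exact aCMA-ES formulas:

```python
import numpy as np

def weighted_covariance_update(C, steps, weights, c_mu=0.1):
    """Rank-mu style covariance update in which the worst steps carry
    negative weights (sketch of the 'active' update idea; the weight
    values and learning rate here are illustrative, not aCMA-ES's)."""
    # steps: rows are mean-centered, sigma-normalized offspring steps,
    # sorted from best to worst; positive weights sum to 1
    rank_mu = sum(w * np.outer(y, y) for w, y in zip(weights, steps))
    return (1 - c_mu) * C + c_mu * rank_mu

d = 2
C = np.eye(d)
steps = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7], [-0.7, 0.7]])
weights = np.array([0.6, 0.4, -0.3, -0.3])  # negative weights on worst steps
C_new = weighted_covariance_update(C, steps, weights)
```

Because negative weights can in principle drive eigenvalues of the covariance matrix toward zero or below, the actual algorithm bounds their magnitude; that safeguard is omitted here.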